Picturing Algorithmic Surveillance: The Politics of Facial Recognition Systems

Authors

  • Lucas D. Introna, Centre for the Study of Technology and Organisation, Lancaster University Management School, mailto:[email protected]
  • David Wood, Global Urban Research Unit (GURU), School of Architecture Planning and Landscape, University of Newcastle, mailto:[email protected]
Abstract

This paper opens up for scrutiny the politics of algorithmic surveillance through an examination of Facial Recognition Systems (FRS’s) in video surveillance, showing that seemingly mundane design decisions may have important political consequences that ought to be subject to scrutiny. It first focuses on the politics of technology and algorithmic surveillance systems in particular: considering the broad politics of technology; the nature of algorithmic surveillance and biometrics, claiming that software algorithms are a particularly important domain of techno-politics; and finally considering both the growth of algorithmic biometric surveillance and the potential problems with such systems. Secondly, it gives an account of FRS’s, the algorithms upon which they are based, and the biases embedded therein. In the third part, the ways in which these biases may manifest themselves in real-world implementations of FRS’s are outlined. Finally, some policy suggestions for the future development of FRS’s are made; it is noted that the most common critiques of such systems are based on notions of privacy which seem increasingly at odds with the world of automated systems.

Introduction: the circulation of faces

In a post-9/11 world security has become a big question for those feeling vulnerable. As in so many instances in social history, the answer to this vulnerability is sought in a sort of certainty rooted in surveillance (Lyon, 1994, 2001, 2002; Dandeker, 1990). It is argued that through surveillance and early detection the problem can be solved. Security can be secured. Surveillance is a powerful technology for social control; however, when surveillance becomes digitised there is a “step change in power, intensity and scope” (Graham and Wood, 2003). Digitisation permits the use of software algorithms (mathematical instructions; see Section 2) for automated identification of human biometrics (a bodily trace; see Section 2). With a biometric it is very difficult, if not impossible, for any individual to disassociate oneself (or be alienated) from one’s biometric; in a sense you are your biometric (Van der Ploeg, 2002). Thus, if there is a match between your body and your biometric, certainty over identity can be established.

However, effective surveillance also needs to be subtle, to be insinuated into the context of everyday life. Indeed, in the security world the perfect unobtrusive biometric is considered the ‘holy grail’. It is therefore not surprising that facial recognition systems (FRS’s) have become a prime focus for the security establishment (Kopel and Krause, 2003). Not only are they relatively inexpensive, and supposedly effective, they require no involvement from their targets. Unlike other biometrics, facial recognition can operate anonymously in the background. The targets do not need to surrender their face image, as they would their fingerprint or their iris scan. A face can be captured and (de)coded without the consent or participation of those being targeted. However, this ‘captured’ face image is only of use if it can be matched with an identifier. It requires a database of face images with associated identities.
Unlike fingerprints or DNA samples, which are only collected when there is a reasonable level of suspicion of a crime, face images are routinely collected in society by a variety of institutions, such as when we apply for a driving licence, or a passport, or a library card. The face is the most common biometric used by humans to identify other humans. Indeed, in any western society, if one were somehow to cover, or be seen to attempt to disguise, one’s face, then there is almost an immediate assumption of guilt. One could almost say that there is an implicit common agreement to reveal our faces to others as a condition for ongoing social order. Thus, we tend to reveal our face to others and they to us. However, it seems that such an agreement only operates in a local and situated manner, as part of the social relationships we control. We would find it unacceptable if a stranger were to photograph our face for no apparent reason. On the other hand, we do not find it unacceptable to surrender our faces for the regulation of privileges, as long as we remain in control of their use and circulation. In most cases the face image is used in the moment of authentication (by means of visual comparison) and then forgotten. However, what happens if our faces are collected anonymously, encoded, and start to circulate in an invisible network, even if it is for seemingly mundane reasons? What happens when our face “becomes a bar code”, in the words of Agre (2003)? This is not what we have in mind when we reveal our faces. It seems to us that the presence of FRS’s may indeed be changing the implied relationships we assume when facing others. This concern becomes even more acute if we start to ‘unpack’ these algorithms to discover that, to the algorithms, ‘all faces are not equal’.

It is our contention that FRS’s are a very powerful and ambiguous technology for social control. As such they require much more scrutiny than they have had up to now. The purpose of this paper is to open up for scrutiny the politics of facial recognition technology and its use in ‘smart’ CCTV. We aim to show that seemingly mundane design decisions may have important political consequences that ought to be subject to scrutiny. This paper is one step in that direction. It is structured as follows: the first section will focus on the politics of technology and algorithmic surveillance systems in particular, considering first the broad politics of technology, then explaining the nature of algorithmic surveillance and biometrics, claiming that software algorithms are a particularly important domain of techno-politics, and finally considering both the growth of algorithmic biometric surveillance and the potential problems with such systems. In the second section, we will give an account of FRS’s, the algorithms they are based upon, and the biases embedded therein. In the third part, we will discuss the ways in which these biases may manifest themselves in real-world implementations of FRS’s. Finally, we will make some policy suggestions for the future development of FRS’s; it should be noted that the most common critiques of such systems are based on notions of privacy which seem increasingly at odds with the world of automated systems.

1. The Politics of Technology

The Micro-Politics of the Artefact

Technology is political (Winner, 1980). By this we mean that technology, by its very design, includes certain interests and excludes others.
It is mostly an implicit politics, part of a mundane process of trying to solve practical problems. For example, the ATM bank machine assumes a particular person in front of it. It assumes a person that is able to see the screen, read it, remember and enter a PIN code, etc. It is not difficult to imagine a whole section of society that does not conform to this assumption. If you are blind, in a wheelchair, have problems remembering, or are unable to enter a PIN because of disability, then your interest in accessing your account can be excluded by the ATM design. This exclusion of interests may not be obvious to the designers of ATMs, as they may see their task simply as solving the basic problem of making banking transactions more efficient for the ‘average’ customer doing average transactions. And they are mostly right; but where they are not, their biases can become profoundly stubborn. These systems often seem like devices for surveillance and social control (in the sense of Foucault’s dispositif panoptique), but as Lianos (2001, 2003) has recently pointed out, they are not designed with the monitoring and control of the human subject directly in mind; rather, this is a potential (or secondary) function of systems for ensuring flow. Nevertheless, the binary effects are in some senses quite irreversible. Where do the excluded go to appeal when they are faced with a stubborn and mute object such as an ATM? Maybe they can work around it, by going into the branch for example. This may be possible. However, this exclusion becomes all the more significant because of the political-economic context in which these dispositifs exist and which they help to transform, for example if banks start to close branches or charge for over-the-counter transactions (as is happening). Thus, as the micro-politics of the ATM becomes tied to, and multiplied through, other exclusionary practices, what seems a rather trivial injustice may soon multiply into what may appear to be a coherent and intentional strategy of exclusion (Introna and Nissenbaum, 2000). Yet there is often nobody who ‘authored’ it as such (Foucault, 1975; Kafka, 1925). This paper will show how such an ‘unauthored’ strategy may be emerging in facial recognition technology.

Thus, the politics of technology is more than the politics of this or that artefact. Rather, these artefacts function as nodes, or links, in a dynamic socio-technical network, or collective, kept in place by a multiplicity of artefacts, agreements, alliances, conventions, translations, procedures, threats, and so forth: in short, by relationships of power and discipline (Callon, 1986, 1991). Some are stable, even irreversible; some are dynamic and fragile. Analytically we can isolate and describe these networks (see Law, 1991, for examples). However, as we survey the landscape of networks we cannot locate, in any obvious manner, where they begin or where they end. Indeed, we cannot with any degree of certainty separate the purely social from the purely technical, cause from effect, designer from user, winners from losers, and so on. In these complex and dynamic socio-technical networks, ATMs, doors, locks, keys, cameras, algorithms, etc.
— function as political ‘locations’ where values and interests are negotiated and ultimately ‘inscribed’ into the very materiality of the things themselves, thereby rendering these values and interests more or less permanent (Akrich, 1992; Latour, 1991). Through these inscriptions, which may be more or less successful, those that encounter and use these inscribed artefacts become, wittingly or unwittingly, enrolled into particular programmes, or scripts for action. Neither the artefacts nor those that draw upon them simply accept these inscriptions and enrolments as inevitable or unavoidable. In the flow of everyday life artefacts often get lost, break down, and need to be maintained. Furthermore, those that draw upon them use them in unintended ways, ignoring or deliberately ‘misreading’ the script the objects may endeavour to impose. Nevertheless, to the degree that these enrolments are successful, the consequences of such enrolments can result in more or less profound political ‘ideologies’ that ought to be scrutinised. We would claim that the politics of artefacts is much more mundane and much more powerful than most other politics, yet it often evades our scrutiny. It is with this in mind that we can introduce the politics of algorithmic surveillance.

2. Algorithmic Surveillance

What is an Algorithm?

The word ‘algorithm’ derives from the name of the hugely influential 9th-century Muslim mathematician Muhammed ibn Musa al-Khwarizmi, who produced the first extant text on algebra, a term which also originates with him. 12th-century Christian scholars used al-Khwarizmi’s name, latinised as ‘algorismus’, to differentiate his method of calculation from commonly used methods like the abacus or counting tables (for more on the history of algorithms, see Chabert et al., 1999). An algorithm is simply a mathematical, or logical, term for a set of instructions. Algorithms can be divided into trivial and non-trivial types, the former being sets of instructions that are only applicable to a specific situation, or to a task that needs no further explanation, the latter being instructions that will provide answers given any compatible input. Texts on algorithmics often give the example of a recipe as a useful metaphor for understanding the concept, though in fact this is slightly inaccurate: the recipe is more like software (see below). Algorithms form the basis of modern mathematics and, most importantly here, the foundation of computing. However, in themselves algorithms are not accessible to computers; they need to be translated into a form that computers have been programmed to understand. This process, known as coding (or hacking), produces software. Software is essentially composed of many coded algorithms linked together to produce a desired output from the hardware. In the metaphor mentioned above, the software is therefore the recipe. Computer hardware is not in itself usually algorithmic; rather, it is composed of many physical switches (however small) which have two positions: on/off, 1/0, etc. These switches then respond to instructions from the software, once it has been translated (assembled) into binary machine code (for more on algorithms and computing, see Harel, 1992).
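To make this distinction concrete, consider Euclid’s method for finding the greatest common divisor of two numbers, one of the oldest non-trivial algorithms. The following is a minimal sketch (the language, Python, is an arbitrary choice on our part; nothing here depends on it), giving the algorithm first as a set of instructions and then coded as software:

    # Euclid's algorithm, stated as a set of instructions:
    #   1. Take two positive whole numbers a and b.
    #   2. If b is zero, the answer is a.
    #   3. Otherwise replace (a, b) with (b, the remainder of a divided by b)
    #      and go back to step 2.
    # The function below is the same algorithm coded as software, a form the
    # computer can act on once it is assembled into binary machine code.

    def gcd(a: int, b: int) -> int:
        """Return the greatest common divisor of two positive integers."""
        while b != 0:
            a, b = b, a % b
        return a

    print(gcd(48, 18))  # prints 6

This is a non-trivial algorithm in the sense given above: it provides an answer for any compatible input (any pair of positive whole numbers), not merely for one specific situation.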
Algorithmic Surveillance

The term ‘algorithmic surveillance’ was coined by Norris and Armstrong (1999) in their pioneering book, The Maximum Surveillance Society. It is, in literal terms, surveillance that makes use of automatic step-by-step instructions. However, it is used specifically to refer to surveillance technologies that make use of computer systems to provide more than the raw data observed. This can range from systems that classify and store simple data, through more complex systems that compare the captured data to other data and provide matches, to systems that attempt to predict events based on the captured data. Thus many surveillance technologies currently in use have algorithmic aspects, but not all. A city-centre CCTV system that provides images that are watched and analysed by guards or police is not algorithmic. If such a system contains a computer which compares the faces of people captured by the cameras with those of known offenders, then it is. If typists enter the health details of a patient into the Health Service computer, then it is algorithmic to a limited extent, in that software determines the extent of the information that can be entered; however, it becomes what is usually understood as algorithmic surveillance when, for example, a program is installed which compares the patient records against signs of particular disease risk-factors, and defines or categorises patients automatically.
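The middle case, comparing captured data against stored data and providing matches, is the pattern on which the FRS’s discussed below rest. The following minimal sketch illustrates only that pattern: the numeric ‘templates’, the distance measure and the threshold are all invented for illustration and correspond to no actual system.

    import math

    # A toy watchlist. In a real system each template would be produced by a
    # feature-extraction step run over a stored face image; these numbers
    # are made up purely for illustration.
    WATCHLIST = {
        "suspect_A": [0.10, 0.80, 0.30],
        "suspect_B": [0.90, 0.20, 0.50],
    }

    def distance(a, b):
        """Euclidean distance between two templates."""
        return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

    def find_matches(probe, watchlist, threshold=0.25):
        """Return every identity whose stored template lies within the threshold."""
        return [identity for identity, template in watchlist.items()
                if distance(probe, template) <= threshold]

    # A captured face, encoded into the same kind of template, is compared
    # against every entry in the database.
    print(find_matches([0.12, 0.78, 0.33], WATCHLIST))  # -> ['suspect_A']

Note that the threshold is exactly the kind of seemingly mundane design decision this paper is concerned with: set it low and the system fails to find its targets; set it high and it casts suspicion on the innocent.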
Algorithmic Surveillance in Practice

There are now many algorithmic surveillance systems which watch over almost all aspects of this planet and beyond (if one includes systems like the Hubble deep-space telescope and the Cassini project). Many of these systems monitor the non-human (water and electricity flow, etc.) and are thus left largely unconsidered by social researchers, although there was a temporary wave of concern prior to the year 2000 with the fear of the so-called Millennium Bug, which apparently had the potential to cause many of these systems to fail or malfunction. There are many algorithmic systems which have a hybrid monitoring function, for example the recordings of cash withdrawals from ATMs, credit and debit card transactions, and purchases in stores, to which we referred above. There are also systems which algorithmically record and sort data about things, but things which are related to human beings, for example car number-plate recognition. Again, these systems do indirectly monitor people, but there is no necessary correlation between a particular human and a number plate, although such a correlation is quite likely and in some cases legally restricted. Systems like movement recognition can be used both for non-human things and for human beings, depending on the circumstances and details of the technology used. Examples of these are again often about flow management, for example the Prismatica/Cromatica movement-recognition system developed for the London Underground to ensure that the movement of passengers is efficient and safe. As it turned out in operation, the original system had the unintended consequence of being able to detect potential suicides, as it was observed that they remained relatively motionless on the platform for longer periods than most before jumping (Norris, 2003); but again this is an indirect consequence of the behaviour of human beings observed through a system of flow management. These systems are extremely interesting because, whatever their intention, they do transform the context of social interaction in quite fundamental ways, creating what Lianos and Douglas (2000) call Automated Socio-Technical Environments (ASTEs).

Recent years have, however, seen the largely experimental introduction of automated systems for the direct monitoring of human beings based on physical traits unique to the individual. These ‘biometric’ identification systems include: gait recognition; hand-geometry recognition; fingerprint and palm-print recognition; facial recognition; and iris recognition. Each has its own technical merits and drawbacks, and each is suitable for different uses in varying physical environments. The oldest of these are hand-geometry recognition systems, which internally have remained largely unchanged since the 1970s, and which still work very well in environments where access is restricted to a relatively small database of people.

The Growth of Algorithmic Surveillance

Before the attacks of September 11 2001, the biometrics industry was expanding steadily but not spectacularly, and was also facing increasing opposition from civil rights and privacy groups. In the immediate aftermath of the attacks, particularly in the USA, there was a general assumption that rights arguments would lose out during what one of us has elsewhere characterised as a period of ‘surveillance surge’ (Wood, Konvitz and Ball, 2003), wherein those with an interest in new surveillance technologies promote them to a polity shocked enough by events not to consider their efficiency, effectiveness or wider implications as carefully as they might normally do. Zureik (2004) shows that within a few weeks of the terrorist attacks almost 17 bills were introduced in the United States Congress, including measures “to tighten immigration, visa, and naturalization procedures, allow tax benefits to companies that use biometrics, and check employee background at border and maritime check points.”


Journal: Surveillance & Society, Volume 2, Issue 2/3
Published: 2004